    Does the bond-stock earning yield differential model predict equity market corrections better than high P/E models?

    In this paper, we extend the literature on crash prediction models in three main respects. First, we explicitly relate crash prediction measures to asset pricing models. Second, we present a simple, effective statistical significance test for crash prediction models. Finally, we propose a definition and a measure of robustness for crash prediction models. We apply the statistical test and measure the robustness of selected model specifications of the Price-Earnings (P/E) ratio and Bond-Stock Earning Yield Differential (BSEYD) measures. This analysis suggests that the BSEYD, the logarithmic BSEYD model, and to a lesser extent the P/E ratio, are statistically significant, robust predictors of equity market crashes.
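
    The BSEYD measure itself is simple: the long-term bond yield minus the equity earnings yield (the inverse of the P/E ratio), with the logarithmic variant taking the difference of logs. A minimal sketch of both measures with a rolling-quantile crash signal, assuming pandas series of yields and P/E ratios and a 95th-percentile band (the paper's actual specifications and significance test differ):

        import numpy as np
        import pandas as pd

        def bseyd_signals(bond_yield: pd.Series, pe_ratio: pd.Series,
                          window: int = 252, q: float = 0.95) -> pd.DataFrame:
            """BSEYD = bond yield - earnings yield (1/PE); the log variant
            is log(bond yield) - log(earnings yield). A crash signal is
            flagged when a measure crosses its rolling q-quantile band."""
            ey = 1.0 / pe_ratio
            out = pd.DataFrame({
                "bseyd": bond_yield - ey,
                "log_bseyd": np.log(bond_yield) - np.log(ey),
            })
            for col in ("bseyd", "log_bseyd"):
                band = out[col].rolling(window).quantile(q)
                out[col + "_signal"] = out[col] > band
            return out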

    Traffic Crash Prediction Using Machine Learning Models

    Traffic crashes account for a large share of casualties and injuries worldwide, and there has been growing concern about, and study of, the contributing factors of traffic crashes. Many factors cause or relate to the occurrence of a traffic crash, e.g., land use, traffic flow conditions, driver behavior, and weather conditions. This paper studied the spatial and temporal distribution of crashes on a highway and developed real-time prediction models for crash occurrence. Traffic flow data, weather data, and crash data from multiple data sources were collected and processed to develop the models. Multiple machine learning models, such as support vector machine (SVM) and decision tree models, were used as candidate models. It was found that weather, crash time, and traffic flow shortly before the crash occurrence are critical factors for real-time crash prediction. The candidate models have low to moderate sensitivity in predicting crash occurrences, due to the limited sample size. To use the models in a traffic operations environment, a prediction tool with an interactive map could be developed to proactively monitor crash hot spots and prepare staffing and resources for potential crash occurrences.
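
    A hedged sketch of this kind of candidate-model comparison with scikit-learn, where the file name, feature columns, and label are all invented for illustration:

        import pandas as pd
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.metrics import recall_score

        df = pd.read_csv("merged_crash_data.csv")  # hypothetical merged dataset
        X = df[["speed_avg", "volume", "occupancy", "precipitation", "hour"]]
        y = df["crash_within_30min"]  # 1 if a crash followed the observation

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
        candidates = {
            "SVM": SVC(class_weight="balanced"),
            "DecisionTree": DecisionTreeClassifier(class_weight="balanced"),
        }
        for name, model in candidates.items():
            model.fit(X_tr, y_tr)
            # Sensitivity (recall on the crash class) is the key metric here.
            print(name, "sensitivity:", recall_score(y_te, model.predict(X_te)))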

    Review of crash prediction models and their applicability in black spot identification to improve road safety

    Objective: This study aims to review the development of crash prediction models and their application to analysing and identifying black spots to improve road safety. Methods: Several modelling techniques were reviewed, including multiple linear regression, Poisson, negative binomial, random effects, and multiple logistic regression models, to assess their suitability for developing crash prediction models. Studies on the identification of black spots were also reviewed, based on the type of crash data used in the identification process. Results: The reviewed documents highlight the shortcomings of traditional crash prediction models (CPMs) and demonstrate the flexibility and effectiveness of the latest methods. Suitable models can now be developed from several modelling techniques to represent actual scenarios, providing realistic and accurate predictions of crash frequency: for example, to determine whether a location has a traffic safety problem compared to other locations with similar conditions, and to identify suitable measures to reduce crashes. Application/Improvements: The models identified in this research are already in use, but the modelling approaches can be further modified to include the latest technical applications on roads, available post-crash management systems, or safety culture, all of which are commonly related to road safety outcomes.
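
    Among the reviewed techniques, the negative binomial model is the workhorse for crash frequency. A minimal sketch with statsmodels, using an invented site-level dataset and covariates, and an assumed (rather than estimated) dispersion parameter:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        sites = pd.read_csv("site_crash_data.csv")  # hypothetical site-level data

        # Crash frequency vs. exposure (AADT) and geometry covariates.
        nb = smf.glm(
            "crashes ~ np.log(aadt) + lane_width + curve_density",
            data=sites,
            family=sm.families.NegativeBinomial(alpha=1.0),  # alpha assumed
        ).fit()
        print(nb.summary())

        # A simple black-spot screen: sites whose observed crashes far
        # exceed the model's expectation for similar conditions.
        sites["expected"] = nb.predict(sites)
        black_spots = sites[sites["crashes"] > 2 * sites["expected"]]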

    Crash risk estimation and assessment tool

    Currently in Australia, there are no decision support tools for traffic and transport engineers to assess the crash risk potential of proposed road projects at the design stage. Equivalent tools already exist for traffic performance assessment, e.g. aaSIDRA or VISSIM. The Urban Crash Risk Assessment Tool (UCRAT) was developed for VicRoads by ARRB Group to promote methodical identification of future crash risks arising from proposed road infrastructure, where safety cannot be evaluated from past crash history. The tool assists practitioners with key design decisions to arrive at the safest and most cost-optimal design options. This paper details the development and application of the UCRAT software. This professional tool may be used to calculate an expected mean number of casualty crashes for an intersection, a road link, or a defined road network consisting of a number of such elements. The mean number of crashes provides a measure of risk associated with the proposed functional design and allows evaluation of alternative options. The tool is based on historical data for existing road infrastructure in metropolitan Melbourne and takes into account the influence of key design features, traffic volumes, road function, and the speed environment. Crash prediction modelling and risk assessment approaches were combined to develop its unique algorithms. The tool has application in projects such as road access proposals associated with land use developments, public transport integration projects, and new road corridor upgrade proposals.
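
    The abstract does not give UCRAT's algorithms, but the general structure it describes, element-level expected crash means aggregated over a network, can be sketched as follows, with every rate and modifier invented for illustration:

        from dataclasses import dataclass

        @dataclass
        class RoadElement:
            kind: str            # "intersection" or "link"
            aadt: float          # annual average daily traffic
            base_rate: float     # casualty crashes per 10,000 AADT (assumed)
            speed_factor: float  # modifier for the speed environment (assumed)

        def expected_casualty_crashes(elements: list[RoadElement]) -> float:
            """Sum element-level expected casualty crashes per year; each
            element's mean is exposure times a calibrated rate and modifiers."""
            return sum(e.base_rate * (e.aadt / 10_000) * e.speed_factor
                       for e in elements)

        network = [RoadElement("intersection", 25_000, 0.8, 1.2),
                   RoadElement("link", 18_000, 0.3, 1.0)]
        print(f"Expected crashes/year: {expected_casualty_crashes(network):.2f}")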

    The 1929 stock market: Irving Fisher was right

    Many stock market analysts think that in 1929, at the time of the crash, stocks were overvalued. Irving Fisher argued just before the crash that fundamentals were strong and the stock market was undervalued. In this paper, we use growth theory to estimate the fundamental value of corporate equity and compare it to actual stock valuations. Our estimate is based on the values of productive corporate capital, both tangible and intangible, and on tax rates on corporate income and distributions. The evidence strongly suggests that Fisher was right: even at the 1929 peak, stocks were undervalued relative to the prediction of theory. Keywords: Depressions; Stock market.
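
    A stylized version of this kind of valuation (the paper's growth-theory model is richer; the formula and every number below are assumptions for illustration): the fundamental value of equity is the after-distribution-tax value of productive capital.

        # Stylized valuation: V = (1 - tau_d) * (K_tan + K_int), where tau_d
        # is the effective tax rate on corporate distributions. All numbers
        # are invented, in arbitrary units.
        k_tangible = 1.0     # tangible corporate capital (assumed)
        k_intangible = 0.4   # intangible corporate capital (assumed)
        tau_d = 0.10         # effective distribution tax rate (assumed)

        fundamental_value = (1 - tau_d) * (k_tangible + k_intangible)
        market_value = 1.1   # observed market value of equity (assumed)
        print(f"fundamental {fundamental_value:.2f} vs market {market_value:.2f}")
        # market_value < fundamental_value would imply undervaluation.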

    Applying Machine Learning Techniques to Analyze the Pedestrian and Bicycle Crashes at the Macroscopic Level

    This thesis presents different data mining/machine learning techniques to analyze vulnerable road users' (i.e., pedestrian and bicycle) crashes by developing crash prediction models at the macro level. In this study, we developed a data mining approach (decision tree regression (DTR) models) for both pedestrian and bicycle crash counts. To the authors' knowledge, this is the first application of DTR models at the macro level in the growing traffic safety literature. The empirical analysis is based on Statewide Traffic Analysis Zone (STAZ) level crash count data for both pedestrians and bicycles from the state of Florida for the years 2010 to 2012. The model results highlight the most significant predictor variables for pedestrian and bicycle crash counts in three broad categories: traffic, roadway, and sociodemographic characteristics. Furthermore, spatial predictor variables of neighboring STAZs were utilized along with the targeted STAZ variables to improve the prediction accuracy of both DTR models. The DTR model with spatial predictor variables (spatial DTR model) was compared with the model without them (aspatial DTR model), and the comparison clearly showed that the spatial DTR model is superior in terms of prediction accuracy. Finally, this study contributes to the safety literature by applying three ensemble techniques (bagging, random forest, and boosting) to improve the prediction accuracy of the weak learner (the DTR model) for macro-level crash counts. The estimation results revealed that all the ensemble techniques performed better than the DTR model, and gradient boosting outperformed the other ensemble techniques in macro-level crash prediction.
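
    A hedged sketch of this DTR-versus-ensembles comparison with scikit-learn; the file, columns, and hyperparameters are all invented, and the thesis's exact settings may differ:

        import pandas as pd
        from sklearn.tree import DecisionTreeRegressor
        from sklearn.ensemble import (BaggingRegressor, GradientBoostingRegressor,
                                      RandomForestRegressor)
        from sklearn.model_selection import cross_val_score

        staz = pd.read_csv("staz_ped_crashes.csv")   # hypothetical STAZ-level data
        y = staz["ped_crash_count"]
        X = staz.drop(columns=["ped_crash_count"])   # traffic, roadway, socio-demo

        models = {
            "DTR": DecisionTreeRegressor(max_depth=6),
            "Bagging": BaggingRegressor(DecisionTreeRegressor(max_depth=6)),
            "RandomForest": RandomForestRegressor(),
            "Boosting": GradientBoostingRegressor(),
        }
        for name, m in models.items():
            mae = -cross_val_score(m, X, y, cv=5,
                                   scoring="neg_mean_absolute_error").mean()
            print(f"{name}: mean absolute error = {mae:.2f}")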

    A Statistical Analysis of Log-Periodic Precursors to Financial Crashes

    Motivated by the hypothesis that financial crashes are macroscopic examples of critical phenomena associated with a discrete scaling symmetry, we reconsider the evidence of log-periodic precursors to financial crashes and test the prediction that log-periodic oscillations in a financial index are embedded in the mean function of this index. In particular, we examine the first differences of the logarithm of the S&P 500 prior to the October 1987 crash and find that the log-periodic component of this time series is not statistically significant if we exclude the last year of data before the crash. We also examine the claim that two separate mechanisms are responsible for drawdowns in the S&P 500 and find the evidence supporting this claim unconvincing.
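
    The mean function at issue is the log-periodic power law (LPPL), log p(t) = A + B(t_c - t)^beta [1 + C cos(omega ln(t_c - t) + phi)]. A minimal sketch of fitting it with scipy, on a placeholder series rather than real S&P 500 data, with all starting values and bounds assumed:

        import numpy as np
        from scipy.optimize import curve_fit

        def lppl(t, tc, A, B, beta, C, omega, phi):
            """Log-periodic power-law mean function for log prices."""
            dt = tc - t
            return A + B * dt**beta * (1 + C * np.cos(omega * np.log(dt) + phi))

        t = np.linspace(0.0, 2.5, 500)        # years of (placeholder) data
        log_p = np.log(250) + 0.1 * t         # stand-in for log index levels

        p0 = [2.6, np.log(250), -0.5, 0.5, 0.1, 7.0, 0.0]  # assumed start values
        lower = [2.51, -10, -10, 0.1, -1.0, 3.0, -np.pi]   # keep t_c past sample
        upper = [5.0, 10, 10, 0.9, 1.0, 15.0, np.pi]
        params, _ = curve_fit(lppl, t, log_p, p0=p0, bounds=(lower, upper),
                              maxfev=20000)
        print("estimated critical time t_c:", params[0])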

    The Effects of Inaccurate and Missing Highway-Rail Grade Crossing Inventory Data on Crash Model Estimation and Crash Prediction

    ABSTRACT: Most highway-rail grade crossing (HRGC) crash models in the US rely on the Federal Railroad Administration's (FRA) highway/rail crossing inventory database. Any errors or incomplete information in this database affect the estimated crash model parameters and subsequent crash predictions. Using 560 HRGCs in Nebraska, this study illustrates differences in crash predictions obtained from the FRA's new (2020) Accident Prediction and Severity (APS) model when: 1) using the unaltered, original FRA HRGC inventory dataset as input, and 2) using a field-validated inventory dataset for those 560 HRGCs as input. Results showed that the predictions from the two input datasets were statistically significantly different. HRGC hazard rankings from the two sets of predictions, as well as from FRA's Web Accident Prediction System (WBAPS), differed from each other. Estimating new zero-inflated negative binomial models using five years of reported HRGC crashes and the two inventory datasets for the 560 HRGCs enabled comparison of parameter estimates and marginal values, showing differences in the estimated coefficients' magnitudes and average marginal effects. The conclusions were that erroneous and missing data in the unaltered FRA HRGC inventory dataset led to statistically different crash predictions compared to the corrected, complete (field-validated) inventory dataset, and that estimated crash prediction model parameters and their marginal values differed between models based on the two inventory datasets.
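
    A minimal sketch of the zero-inflated negative binomial estimation with statsmodels, where the file and covariate names are invented and the study's actual specification surely differs:

        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

        hrgc = pd.read_csv("hrgc_inventory_validated.csv")  # hypothetical data
        y = hrgc["crashes_5yr"]                             # 5-year crash counts
        X = sm.add_constant(hrgc[["aadt", "trains_per_day", "gates"]])

        # p=2 gives the NB2 variance function; the same covariates are used
        # for the zero-inflation (logit) component here for simplicity.
        zinb = ZeroInflatedNegativeBinomialP(y, X, exog_infl=X, p=2).fit(maxiter=200)
        print(zinb.summary())

        # Average marginal effects, the quantity compared across the two
        # inventory datasets in the study.
        print(zinb.get_margeff().summary())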

    Housing Market Crash Prediction Using Machine Learning and Historical Data

    The 2008 housing crisis was caused by faulty banking policies and the use of credit derivatives of mortgages for investment purposes. In this project, we look into datasets that are markers of a typical housing crisis. Using those datasets, we build three machine learning models: linear regression, a hidden Markov model (HMM), and a long short-term memory (LSTM) network. After building the models, we conducted a comparative study of the predictions made by each. The linear regression model did not predict a housing crisis; instead, it showed house prices rising steadily, with an R-squared score of 0.76. The HMM predicted a fall in house prices, with an R-squared score of 0.706. Lastly, the LSTM showed that house prices would fall briefly but stabilize after that, a fall not as sharp as the one predicted by the HMM; its R-squared score of 0.9 is the highest among the models. Although the R-squared score does not say how accurate a model is, it does say how closely a model fits the data, and by this measure the model that best fits the data was the LSTM. As the same dataset was used for all the models, it is safe to say the prediction made by the LSTM is better than the others.
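
    A hedged sketch of the shared evaluation setup, comparing candidate models by R-squared on the same lagged price windows; the synthetic series stands in for the project's housing data, and the HMM and LSTM candidates (e.g., via hmmlearn and keras) would plug into the same loop:

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(0)
        prices = 100 + np.cumsum(rng.normal(0.3, 1.0, 240))  # synthetic index
        lags = 12
        X = np.column_stack([prices[i:len(prices) - lags + i]
                             for i in range(lags)])
        y = prices[lags:]
        split = int(0.8 * len(y))

        lin = LinearRegression().fit(X[:split], y[:split])
        r2 = r2_score(y[split:], lin.predict(X[split:]))
        print(f"linear regression R^2: {r2:.3f}")
        # An HMM and an LSTM trained on the same windows would be scored
        # with the same r2_score for a like-for-like comparison.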

    How to determine an optimal threshold to classify real-time crash-prone traffic conditions?

    One proactive approach to reducing traffic crashes is to identify hazardous traffic conditions that may lead to a crash, known as real-time crash prediction. Threshold selection is one of the essential steps of real-time crash prediction: it provides the cut-off point for the posterior probability, used to separate potential crash warnings from normal traffic conditions once a crash risk evaluation model has produced the probability of a crash occurring given a specific traffic condition. There is, however, a dearth of research on how to determine an optimal threshold effectively; the few studies that address it use subjective methods to choose the threshold, and only when discussing the predictive performance of their models. Subjective methods cannot automatically identify the optimal thresholds in different traffic and weather conditions in real applications, so a theoretical method for selecting the threshold value is necessary to avoid subjective judgments. The purpose of this study is to provide a theoretical method for automatically identifying the optimal threshold. Considering the random effects of variable factors across all roadway segments, a mixed logit model was utilized to develop the crash risk evaluation model and evaluate crash risk. Cross-entropy, between-class variance, and other theories were employed and investigated to empirically identify the optimal threshold, and K-fold cross-validation was used to validate the performance of the proposed threshold selection methods with the help of several evaluation criteria. The results indicate that (i) the mixed logit model achieves good performance, and (ii) the threshold selected by the minimum cross-entropy method outperforms the other methods on the classification criteria. The minimum cross-entropy method is thus well suited to automatically identifying thresholds in crash prediction, by minimizing the cross-entropy between the original dataset, with its continuous probabilities of a crash occurring, and the binarized dataset obtained after using the threshold to separate potential crash warnings from normal traffic conditions.
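
    One standard way to operationalize minimum cross-entropy thresholding (in the spirit of Li and Lee's criterion, adapted here to predicted crash probabilities; the paper's exact formulation may differ) is sketched below, with synthetic probabilities standing in for the mixed logit model's output:

        import numpy as np

        def min_cross_entropy_threshold(p: np.ndarray,
                                        candidates: np.ndarray) -> float:
            """Choose the cut-off minimizing the cross-entropy between the
            continuous probabilities and their two-class (binarized)
            representation, where each class is summarized by its mean."""
            p = np.clip(p, 1e-9, 1.0)
            best_theta, best_ce = candidates[0], np.inf
            for theta in candidates:
                low, high = p[p < theta], p[p >= theta]
                if low.size == 0 or high.size == 0:
                    continue
                ce = (np.sum(low * np.log(low / low.mean()))
                      + np.sum(high * np.log(high / high.mean())))
                if ce < best_ce:
                    best_theta, best_ce = theta, ce
            return best_theta

        # p_hat stands in for the mixed logit model's predicted probabilities.
        rng = np.random.default_rng(1)
        p_hat = rng.beta(1.0, 8.0, size=5_000)  # skewed toward safe conditions
        grid = np.linspace(0.01, 0.99, 99)
        print("optimal threshold:", min_cross_entropy_threshold(p_hat, grid))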